We develop data-driven methods incorporating geometric and topological information to learn parsimonious representations of nonlinear dynamics from observations. We develop methods for learning nonlinear state-space models of the dynamics over general manifold latent spaces using training strategies related to Variational Autoencoders (VAEs). We refer to our methods as Geometric Dynamic (GD) Variational Autoencoders (GD-VAEs). We learn encoders and decoders for the system state and its evolution based on deep neural network architectures, including general Multilayer Perceptrons (MLPs), Convolutional Neural Networks (CNNs), and Transpose CNNs (T-CNNs). Motivated by problems arising in parameterized PDEs and physics, we investigate the performance of our methods on tasks of learning low-dimensional representations of the nonlinear Burgers equation, constrained mechanical systems, and spatial fields of reaction-diffusion systems. GD-VAEs provide methods for obtaining representations for use in tasks involving dynamics.
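A minimal sketch of the kind of VAE-style encoder/decoder with a learned latent evolution map described above; the module names, layer sizes, and the linear evolution map are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: a VAE-style encoder/decoder with a learned latent evolution map,
# trained with the usual reconstruction + KL objective.
import torch
import torch.nn as nn

class LatentDynamicsVAE(nn.Module):
    def __init__(self, state_dim, latent_dim=2, hidden=64):
        super().__init__()
        # MLP encoder producing mean and log-variance of the latent code
        self.encoder = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 2 * latent_dim))
        # MLP decoder mapping latent codes back to system states
        self.decoder = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, state_dim))
        # Latent evolution map z_t -> z_{t+1} (one simple choice among many)
        self.evolve = nn.Linear(latent_dim, latent_dim)

    def forward(self, x_t):
        mu, logvar = self.encoder(x_t).chunk(2, dim=-1)
        z_t = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        z_next = self.evolve(z_t)                                # predicted next latent state
        return self.decoder(z_next), mu, logvar

def vae_loss(x_next, x_pred, mu, logvar):
    recon = ((x_next - x_pred) ** 2).sum(dim=-1).mean()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
    return recon + kl
```

For spatial fields, the MLP blocks above would be replaced by CNN encoders and transpose-CNN decoders, as in the architectures listed in the abstract.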
Migraine is a high-prevalence and disabling neurological disorder. However, information about migraine management in real-world settings may be limited in traditional health information sources. In this paper, we (i) verify that there is substantial migraine-related chatter available on social media (Twitter and Reddit), self-reported by migraine sufferers; (ii) develop a platform-independent text classification system for automatically detecting self-reported migraine-related posts, and (iii) conduct analyses of the self-reported posts to assess the utility of social media for studying this problem. We manually annotated 5750 Twitter posts and 302 Reddit posts. Our system achieved an F1 score of 0.90 on Twitter and 0.93 on Reddit. Analysis of information posted by our 'migraine cohort' revealed the presence of a plethora of relevant information about migraine therapies and patient sentiments associated with them. Our study forms the foundation for conducting an in-depth analysis of migraine-related information using social media data.
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Reliable application of machine learning-based decision systems in the wild is one of the major challenges currently investigated by the field. A large portion of established approaches aims to detect erroneous predictions by means of assigning confidence scores. This confidence may be obtained by either quantifying the model's predictive uncertainty, learning explicit scoring functions, or assessing whether the input is in line with the training distribution. Curiously, while these approaches all claim to address the same eventual goal of detecting failures of a classifier upon real-life application, they currently constitute largely separated research fields with individual evaluation protocols, which either exclude a substantial part of relevant methods or ignore large parts of relevant failure sources. In this work, we systematically reveal current pitfalls caused by these inconsistencies and derive requirements for a holistic and realistic evaluation of failure detection. To demonstrate the relevance of this unified perspective, we present a large-scale empirical study, for the first time enabling benchmarking of confidence scoring functions w.r.t. all relevant methods and failure sources. The finding that a simple softmax response baseline is the overall best-performing method underlines the drastic shortcomings of current evaluation, given the abundance of published research on confidence scoring. Code and trained models are at https://github.com/IML-DKFZ/fd-shifts.
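For reference, a minimal sketch of the softmax response baseline mentioned above: the confidence score is simply the maximum softmax probability, and predictions below a threshold (the value here is an arbitrary assumption) are flagged as potential failures.

```python
# Maximum softmax response as a confidence score; threshold and inputs are illustrative.
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def msr_confidence(logits):
    """Maximum softmax response per sample."""
    return softmax(logits).max(axis=1)

logits = np.array([[4.0, 1.0, 0.5],    # confident prediction
                   [1.2, 1.1, 1.0]])   # uncertain prediction
conf = msr_confidence(logits)
flagged = conf < 0.5                   # flag potential failures below a chosen threshold
print(conf, flagged)                   # e.g. [0.93, 0.37] [False, True]
```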
Using 3D CNNs on high resolution medical volumes is very computationally demanding, especially for large datasets like the UK Biobank which aims to scan 100,000 subjects. Here we demonstrate that using 2D CNNs on a few 2D projections (representing mean and standard deviation across axial, sagittal and coronal slices) of the 3D volumes leads to reasonable test accuracy when predicting the age from brain volumes. Using our approach, one training epoch with 20,324 subjects takes 40 - 70 seconds using a single GPU, which is almost 100 times faster compared to a small 3D CNN. These results are important for researchers who do not have access to expensive GPU hardware for 3D CNNs.
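A minimal sketch of the projection step, assuming a cubic volume and the stated mean/standard-deviation statistics; the axis ordering and channel layout are illustrative assumptions.

```python
# Collapse a 3D volume into six 2D channels (mean and std across each slice direction),
# producing an input small enough for a standard 2D CNN.
import numpy as np

def project_volume(volume):
    """volume: cubic 3D array -> array of shape (6, H, W)."""
    channels = []
    for axis in range(3):                      # the three orthogonal slice directions
        channels.append(volume.mean(axis=axis))
        channels.append(volume.std(axis=axis))
    return np.stack(channels, axis=0)

vol = np.random.rand(160, 160, 160).astype(np.float32)   # stand-in for a brain volume
proj = project_volume(vol)
print(proj.shape)   # (6, 160, 160); non-cubic volumes would need resizing before stacking
```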
An effective aggregation of node features into a graph-level representation via readout functions is an essential step in numerous learning tasks involving graph neural networks. Typically, readouts are simple and non-adaptive functions designed such that the resulting hypothesis space is permutation invariant. Prior work on deep sets indicates that such readouts might require complex node embeddings that can be difficult to learn via standard neighborhood aggregation schemes. Motivated by this, we investigate the potential of adaptive readouts given by neural networks that do not necessarily give rise to permutation invariant hypothesis spaces. We argue that in some problems such as binding affinity prediction where molecules are typically presented in a canonical form it might be possible to relax the constraints on permutation invariance of the hypothesis space and learn a more effective model of the affinity by employing an adaptive readout function. Our empirical results demonstrate the effectiveness of neural readouts on more than 40 datasets spanning different domains and graph characteristics. Moreover, we observe a consistent improvement over standard readouts (i.e., sum, max, and mean) relative to the number of neighborhood aggregation iterations and different convolutional operators.
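The contrast between standard and adaptive readouts can be sketched as follows; the padded-MLP readout below is one simple non-permutation-invariant choice and is not the specific architecture evaluated in the paper.

```python
# Standard permutation-invariant readouts vs. an adaptive (neural) readout over
# a zero-padded node-embedding matrix; names and shapes are illustrative.
import torch
import torch.nn as nn

def standard_readout(node_embeddings, mode="mean"):
    # node_embeddings: (num_nodes, dim); invariant to node ordering
    if mode == "sum":
        return node_embeddings.sum(dim=0)
    if mode == "max":
        return node_embeddings.max(dim=0).values
    return node_embeddings.mean(dim=0)

class AdaptiveReadout(nn.Module):
    """MLP over a flattened, zero-padded node set; not permutation invariant."""
    def __init__(self, dim, max_nodes, out_dim):
        super().__init__()
        self.max_nodes = max_nodes
        self.mlp = nn.Sequential(nn.Linear(dim * max_nodes, 128), nn.ReLU(),
                                 nn.Linear(128, out_dim))

    def forward(self, node_embeddings):
        n, d = node_embeddings.shape
        padded = torch.zeros(self.max_nodes, d)
        padded[:n] = node_embeddings           # fixed canonical node ordering assumed
        return self.mlp(padded.flatten())

h = torch.randn(7, 16)                                         # embeddings of a 7-node graph
g_std = standard_readout(h, mode="sum")                        # (16,)
g_adaptive = AdaptiveReadout(16, max_nodes=32, out_dim=16)(h)  # (16,)
```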
The SNMMI Artificial Intelligence (SNMMI-AI) Summit, organized by the SNMMI AI Task Force, took place in Bethesda, MD on March 21-22, 2022. It brought together various community members and stakeholders from academia, healthcare, industry, patient representatives, and government (NIH, FDA), and considered various key themes to envision and facilitate a bright future for routine, trustworthy use of AI in nuclear medicine. In what follows, essential issues, challenges, controversies and findings emphasized in the meeting are summarized.
IceCube is a cubic-kilometer array of optical sensors for detecting atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV, deployed 1.45 km to 2.45 km below the surface of the ice sheet at the South Pole. The classification and reconstruction of events from the in-ice detectors play a central role in IceCube data analyses. Reconstructing and classifying events is challenging due to the detector geometry, inhomogeneous scattering and absorption of light in the ice, and, below 100 GeV, the relatively small number of signal photons produced per event. To address this challenge, IceCube events can be represented as point cloud graphs, with graph neural networks (GNNs) serving as the classification and reconstruction method. GNNs are able to distinguish neutrino events from cosmic-ray backgrounds, classify different neutrino event types, and reconstruct the deposited energy, direction, and interaction vertex. Based on simulation, we provide a comparison in the 1-100 GeV energy range to the current state-of-the-art maximum likelihood techniques used in current IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN improves the signal efficiency by 18% at a fixed false positive rate (FPR), compared to current IceCube methods. Alternatively, the GNN reduces the FPR by more than a factor of 8 (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves by an average of 13%-20% compared to current maximum likelihood techniques. When running on a GPU, the GNN is able to process IceCube events at nearly the median IceCube trigger rate of 2.7 kHz, which opens the possibility of using low-energy neutrinos in online searches for transient events.
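As a rough illustration (not the IceCube analysis code), an event can be treated as a point cloud of sensor hits connected to nearest neighbours, with a small message-passing network producing a signal-versus-background score; the features, connectivity, and layer sizes here are assumptions.

```python
# Toy point-cloud GNN: hits (x, y, z, time, charge) -> event-level score.
import torch
import torch.nn as nn

def knn_edges(pos, k=4):
    """Edge index connecting each hit to its k nearest neighbours."""
    dist = torch.cdist(pos, pos)                           # (N, N) pairwise distances
    idx = dist.topk(k + 1, largest=False).indices[:, 1:]   # drop self-connection
    src = torch.arange(pos.size(0)).repeat_interleave(k)
    return src, idx.reshape(-1)

class SimpleEventGNN(nn.Module):
    def __init__(self, in_dim=5, hidden=32):
        super().__init__()
        self.embed = nn.Linear(in_dim, hidden)
        self.msg = nn.Linear(hidden, hidden)
        self.classify = nn.Linear(hidden, 1)               # neutrino vs. background score

    def forward(self, feats, pos):
        h = torch.relu(self.embed(feats))
        src, dst = knn_edges(pos)
        agg = torch.zeros_like(h).index_add_(0, src, self.msg(h[dst]))  # one message-passing step
        h = torch.relu(h + agg)
        return torch.sigmoid(self.classify(h.mean(dim=0)))  # mean-pooled event score

hits = torch.randn(20, 5)                # 20 hits: x, y, z, time, charge
score = SimpleEventGNN()(hits, hits[:, :3])
```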
Objectives: This study evaluated the impact of a commercially available explainable AI algorithm on augmenting clinicians' ability to identify lung cancer on chest X-rays (CXRs). Design: This retrospective study evaluated the performance of 11 clinicians in detecting lung cancer on chest radiographs, with and without the assistance of a commercially available AI algorithm (red dot, behold.ai) that predicts CXRs suspicious for lung cancer. Clinician performance was assessed against clinically confirmed diagnoses. Setting: The study analysed anonymised patient data from an NHS hospital; the dataset consisted of 400 chest X-rays from adult patients (aged 18 and over) who had a CXR performed in 2020, with corresponding clinical text reports. Participants: A reader panel of 11 clinicians (radiologists, radiology trainees and reporting radiographers) participated. Main outcome measures: Overall accuracy, sensitivity, specificity and precision of clinicians in detecting lung cancer on CXRs, with and without AI input. Agreement rates between clinicians and the standard deviation of their performance, with and without AI input, were also assessed. Results: Clinicians' use of the AI algorithm resulted in improved overall performance for lung tumour detection, achieving an overall increase of 17.4% in lung cancers identified on CXRs, a 13% increase in the detection of both stage 1 and stage 2 lung cancers, and a standardisation of clinician performance. Conclusions: This study demonstrated great promise for the clinical utility of AI algorithms in improving early lung cancer diagnosis and promoting health equity through an overall improvement in reader performance, without impacting downstream imaging resources.
To identify a system (module) embedded in a dynamic network, one must formulate a multiple-input estimation problem that requires certain nodes to be measured and included as predictor inputs. However, some of these nodes may not be measurable in many practical cases due to sensor selection and placement issues. This can result in biased estimates of the target module. Furthermore, the identification problem associated with the multiple-input structure may require determining a large number of parameters that are not of particular interest to the experimenter, with an increased computational complexity in large-sized networks. In this paper, we tackle these problems by using a data augmentation strategy that allows us to reconstruct the missing node measurements and increase the accuracy of the estimated target module. To this end, we develop a system identification method based on regularized kernel-based methods coupled with approximate inference techniques. Keeping a parametric model for the module of interest, we model the other modules as Gaussian processes (GPs) with a kernel given by the so-called stable spline kernel. An empirical Bayes (EB) approach is used to estimate the parameters of the target module. The related optimization problem is solved using an expectation-maximization (EM) method, where we employ a Markov chain Monte Carlo (MCMC) technique to reconstruct the unknown missing node information and the network dynamics. Numerical simulations on dynamic network examples illustrate the potential of the developed method.
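A minimal sketch of the first-order stable spline (TC) kernel that defines the Gaussian-process prior over an unknown module's impulse response; the hyperparameters below are placeholders that the empirical Bayes / EM procedure described above would estimate.

```python
# First-order stable spline (TC) kernel: K[i, j] = lam * beta**max(i, j), 0 <= beta < 1,
# which encodes smooth, exponentially decaying (BIBO-stable) impulse responses.
import numpy as np

def stable_spline_kernel(n, lam=1.0, beta=0.8):
    """Gram matrix over n impulse-response lags; lam and beta are illustrative values."""
    idx = np.arange(n)
    return lam * beta ** np.maximum.outer(idx, idx)

K = stable_spline_kernel(50)
g_sample = np.random.multivariate_normal(np.zeros(50), K)   # one GP draw: a decaying impulse response
```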